ABSTRACT
The need to summarize long medical scan videos, for automatic triage in Emergency Departments and for transmission of the summarized videos in telemedicine, has gained significance during the COVID-19 pandemic. However, supervised learning schemes for summarizing videos are infeasible, as manual labeling of large scan datasets by frontline clinicians is impractical. This work presents a methodology to summarize ultrasound videos using completely unsupervised learning schemes and validates it on lung ultrasound videos. A Convolutional Autoencoder and a Transformer decoder are trained in an unsupervised reinforcement learning setup, i.e., without supervised labels anywhere in the workflow. A novel precision and recall computation for ultrasound videos is also presented; using it, a high precision of 64.36% and an F1 score of 35.87%, with an average video compression rate of 78%, are obtained when validated against clinically annotated cases. Although demonstrated on lung ultrasound videos, our approach can be readily extended to other imaging modalities. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
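The abstract does not specify how the precision and recall computation matches summarized frames to clinical annotations, so the following is only a minimal frame-level sketch; the function name `summary_metrics` and the set-overlap matching rule are assumptions for illustration, not the paper's method:

```python
def summary_metrics(selected, annotated, n_frames):
    """Frame-level precision, recall, F1, and compression rate for a
    video summary, treating frame indices as plain sets.

    selected:  frame indices kept by the summarizer
    annotated: frame indices marked relevant by a clinician
    n_frames:  total number of frames in the original video
    """
    selected, annotated = set(selected), set(annotated)
    tp = len(selected & annotated)          # frames both kept and annotated
    precision = tp / len(selected) if selected else 0.0
    recall = tp / len(annotated) if annotated else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    compression = 1.0 - len(selected) / n_frames  # fraction of video removed
    return precision, recall, f1, compression
```

For example, keeping 4 of 20 frames with 2 of them clinically annotated yields precision 0.5, recall 0.5, and a compression rate of 0.8.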
ABSTRACT
With the advent of the COVID-19 pandemic, health care systems have suffered a tremendous setback, especially in developing countries. Q-learning is a leading and widely used reinforcement learning scheme that can be applied to a variety of real-time applications. This paper proposes a health care system based on the Q-learning algorithm and its implementation for a set of targeted nodes. The proposed system consists of four phases: a front-end system, remote radio heads (RRH), a baseband unit (BBU) pool, and computing. Each phase consists of different network components that are treated as nodes, with different rewards assigned to nodes and pathways. The AI agent continually adapts its approach for future actions. © 2022 IEEE.
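The abstract does not give the paper's state space or reward values, so the state and action names below are placeholders; the update rule itself, however, is the standard tabular Q-learning step:

```python
def q_learning_step(Q, state, action, reward, next_state,
                    alpha=0.1, gamma=0.9):
    """One tabular Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))

    Q is a dict of dicts: Q[state][action] -> value.
    alpha is the learning rate, gamma the discount factor.
    """
    best_next = max(Q[next_state].values()) if Q[next_state] else 0.0
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
    return Q
```

In a setting like the paper's, each network component (front-end node, RRH, BBU pool, computing node) would be a state, and the rewards on nodes and pathways would drive the agent toward preferred routes; those mappings are choices left to the system designer.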
ABSTRACT
The ongoing COVID-19 pandemic has overloaded current healthcare systems, including radiology systems and departments. Machine learning-based medical imaging diagnostic approaches play an important role in tracking the spread of this virus, identifying high-risk patients, and controlling infections in real time. Researchers aggregate radiographic samples from different data sources to establish a multi-source learning scheme that mitigates the insufficiency of COVID-19 samples from individual hospitals, especially in the early stage of the disease. However, data heterogeneity across clinical centers with varying imaging conditions significantly limits model performance. This paper proposes a contrastive learning scheme for the automatic diagnosis of COVID-19 that effectively mitigates data heterogeneity in multi-source data and learns a robust and generalizable model. Inspired by advances in domain adaptation, we employ contrastive training objectives to promote intra-class cohesion across different data sources and inter-class separation of infected and non-infected cases. Extensive experiments on two public COVID-19 CT datasets demonstrate the effectiveness of the proposed method for tackling data heterogeneity, with boosted diagnosis performance. Moreover, benefiting from the contrastive learning framework, our method can be generalized to data heterogeneity problems in a broader multi-source learning setting. © 2021 IEEE
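The abstract names the training objective only at a high level, so the loss below is a generic supervised contrastive sketch, not the paper's exact formulation; the function name and the temperature value are assumptions. It captures the stated idea: embeddings sharing a class label (infected vs. non-infected), regardless of source hospital, are pulled together, while the two classes are pushed apart:

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Illustrative supervised contrastive loss over a batch.

    features: (n, d) array of embeddings (any data source)
    labels:   length-n class labels, e.g. 0 = non-infected, 1 = infected
    Same-label pairs act as positives; all other samples as negatives.
    """
    # L2-normalize so the dot product is cosine similarity
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)
    loss, count = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue  # anchors without positives contribute nothing
        denom = sum(np.exp(sim[i, j]) for j in range(n) if j != i)
        for j in positives:
            loss += -np.log(np.exp(sim[i, j]) / denom)
            count += 1
    return loss / count
```

With this objective, a batch whose same-class embeddings already cluster yields a lower loss than one where the classes are intermixed, which is the gradient signal that promotes intra-class cohesion and inter-class separation.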